Meta's AI rules permitted 'sensual' chats with minors and racist comments

PCWorld

According to an internal Meta policy document leaked to Reuters, the company's AI guidelines allowed provocative and controversial behaviors, including "sensual" conversations with minors. Reuters' review of the policy document revealed that the governing standards for Meta AI (and other chatbots across the company's social media platforms) permitted the tool to "engage a child in conversations that are romantic or sensual," generate false medical information, and help users argue that Black people are "dumber than white people." The policy document reportedly distinguished between "acceptable" and "unacceptable" language, drawing the line at explicit sexualization or dehumanization while still allowing derogatory statements. Meta confirmed the document's authenticity but claimed it had "removed portions which stated it is permissible for chatbots to flirt and engage in romantic roleplay with children." A spokesperson also said that Meta is revising the policy document, clarifying that the company has policies that "prohibit content that sexualizes children and sexualized role play between adults and minors."


'Go back to Mexico': Children who won elementary school robotics competition endure racist abuse

The Independent - Tech

A group of schoolchildren who won a robotics competition were subjected to a barrage of racist abuse from some rival pupils and their parents, who shouted: "Go back to Mexico". It was the first time that pupils from Pleasant Run Elementary School had entered the robotics challenge. Their victory over youngsters from other Indianapolis schools put them a step closer to the state championship. Yet as the children, aged nine and ten, left the event and walked out to the parking area, some of the children they had just beaten, along with their parents, unleashed racist comments.


Microsoft's Twitter AI Tay starts posting offensive and racist comments

Daily Mail - Science & tech

Yesterday, Microsoft launched its latest artificial intelligence (AI) bot, named Tay. It is aimed at 18- to 24-year-olds and is designed to improve the firm's understanding of conversational language among young people online. But within hours of it going live, Twitter users took advantage of flaws in Tay's algorithm that meant the AI chatbot responded to certain questions with racist answers. These included the bot using racial slurs, defending white supremacist propaganda, and supporting genocide.